
    Adaptive two-pass rank order filter to remove impulse noise in highly corrupted images

    In this paper, we present an adaptive two-pass rank order filter to remove impulse noise in highly corrupted images. When the noise ratio is high, rank order filters, such as the median filter, can produce unsatisfactory results. Better results can be obtained by applying the filter twice, which we call two-pass filtering. To further improve performance, we develop an adaptive two-pass rank order filter. Between the two filtering passes, an adaptive process detects irregularities in the spatial distribution of the estimated impulse noise and selectively replaces some pixels changed by the first pass with their original observed values. These pixels are then kept unchanged during the second pass. In combination, the adaptive process and the second filter eliminate more impulse noise and restore pixels that were mistakenly altered by the first filtering. As a result, the reconstructed image maintains a higher degree of fidelity and contains less noise. The idea of adaptive two-pass processing can be applied to many rank order filters, such as the center-weighted median filter (CWMF), adaptive CWMF, lower-upper-middle filter, and soft-decision rank-order-mean filter. Results from computer simulations demonstrate the performance of this type of adaptation with a number of basic rank order filters. This work was supported in part by CenSSIS, the Center for Subsurface Sensing and Imaging Systems, under the Engineering Research Centers Program of the National Science Foundation (NSF) under Award EEC-9986821, by an ARO MURI on Demining under Grant DAAG55-97-1-0013, and by the NSF under Award 0208548. © 2004 IEEE.
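    The two-pass idea with an adaptive restoration step in between can be illustrated with a short sketch. The following Python code uses a plain median filter for both passes; the function name, the `density_thresh` parameter, and the spatial-density heuristic for the adaptive step are illustrative assumptions, not the exact rule used in the paper.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def adaptive_two_pass_median(observed, window=3, density_thresh=0.3):
    """Two median passes with an adaptive restoration step in between."""
    img = observed.astype(np.float64)

    # First pass: ordinary rank order (here: median) filtering.
    first = median_filter(img, size=window)

    # Pixels altered by the first pass are the estimated impulse noise.
    changed = first != img

    # Adaptive step (illustrative heuristic, not the paper's exact rule):
    # a changed pixel with few changed neighbours is treated as a likely
    # false detection; restore its observed value and protect it.
    density = uniform_filter(changed.astype(np.float64), size=window)
    protected = changed & (density < density_thresh)
    first[protected] = img[protected]

    # Second pass: filter again, keeping protected pixels unchanged.
    second = median_filter(first, size=window)
    second[protected] = img[protected]
    return second
```

    The protected pixels are re-imposed after the second pass because another median filtering would otherwise overwrite them, which mirrors the paper's requirement that restored pixels stay fixed during the second filtering.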

    Boosting Continuous Control with Consistency Policy

    Due to its training stability and strong expressiveness, the diffusion model has attracted considerable attention in offline reinforcement learning. However, it brings several challenges: 1) the large number of diffusion steps required makes diffusion-model-based methods time-inefficient and limits their application to real-time control; 2) how to achieve policy improvement with accurate guidance for a diffusion-model-based policy is still an open problem. Inspired by the consistency model, we propose a novel time-efficient method named Consistency Policy with Q-Learning (CPQL), which derives actions from noise in a single step. By establishing a mapping from the reverse diffusion trajectories to the desired policy, we simultaneously address the issues of time efficiency and inaccurate guidance when updating a diffusion-model-based policy with the learned Q-function. We demonstrate that CPQL achieves policy improvement with accurate guidance for offline reinforcement learning and can be seamlessly extended to online RL tasks. Experimental results indicate that CPQL achieves new state-of-the-art performance on 11 offline and 21 online tasks, improving inference speed by nearly 45 times compared to Diffusion-QL. We will release our code later.
    Comment: 18 pages, 9 page
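    The single-step sampling that gives CPQL its speedup is easy to sketch. The following is a minimal, hypothetical PyTorch illustration of a consistency-style policy that maps a state and a noise vector to an action in one forward pass, paired with a generic Q-guided actor objective; the class name, network shape, and loss are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class ConsistencyPolicy(nn.Module):
    """Maps (state, noise) to an action in a single forward pass."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.action_dim = action_dim
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),  # actions in [-1, 1]
        )

    def forward(self, state, noise):
        # One evaluation replaces the many reverse-diffusion steps a
        # diffusion policy needs, which is where the large inference
        # speedup over Diffusion-QL comes from.
        return self.net(torch.cat([state, noise], dim=-1))

def sample_action(policy, state):
    """Draw Gaussian noise and map it to an action in one step."""
    noise = torch.randn(state.shape[0], policy.action_dim)
    return policy(state, noise)

def policy_improvement_loss(policy, q_net, states):
    """Generic actor objective: push single-step actions toward high Q."""
    actions = sample_action(policy, states)
    return -q_net(states, actions).mean()
```

    The actor objective shown is the standard "maximize the learned Q-value" update; the paper's actual consistency-training loss, which ties the policy to the reverse diffusion trajectories, is more involved.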